7 research outputs found

    Local Contrast Enhancement Utilizing Bidirectional Switching Equalization Of Separated And Clipped Sub-Histograms

    Get PDF
    Digital image contrast enhancement methods based on the histogram equalization (HE) technique are useful in consumer electronic products due to their simple implementation. However, almost all of the proposed enhancement methods use a global processing technique, which does not emphasize local content.
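    The global histogram equalization baseline that this abstract contrasts against can be summarized in a few lines. The sketch below is illustrative only (the function name and NumPy-based implementation are not from the paper): every pixel is remapped through one image-wide transfer function, which is why local content receives no special emphasis.

```python
import numpy as np

def global_histogram_equalization(gray: np.ndarray) -> np.ndarray:
    """Classic global HE sketch: remap intensities through the normalized CDF.

    `gray` is an 8-bit grayscale image; every pixel is remapped with the same
    transfer function, so local content receives no special treatment.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                              # normalize CDF to [0, 1]
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[gray]
```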

    Local Contrast Enhancement Utilizing Bidirectional Switching Equalization of Separated and Clipped Subhistograms

    Get PDF
    Digital image contrast enhancement methods based on the histogram equalization technique are still useful in consumer electronic products due to their simple implementation. However, almost all of the proposed enhancement methods use a global processing technique, which does not emphasize local content. Therefore, this paper proposes a new local image contrast enhancement method, based on the histogram equalization technique, which not only enhances contrast but also increases the sharpness of the image. In addition, the method preserves the mean brightness of the image. To limit noise amplification, the proposed method utilizes local mean-separation and clipped-histogram-bin methodologies. Based on nine test color images and a benchmark against three other histogram equalization based methods, the proposed technique shows the best overall performance.
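    The paper's exact formulation is not reproduced in this listing. The sketch below only illustrates, under simplifying assumptions, the two ideas the abstract names: separating the histogram at the mean intensity (to help preserve mean brightness) and clipping the sub-histogram bins (to limit noise amplification). It applies them globally rather than locally for brevity, and all names are illustrative.

```python
import numpy as np

def mean_separated_clipped_he(gray: np.ndarray, clip_ratio: float = 2.0) -> np.ndarray:
    """Simplified sketch: split the histogram at the mean intensity, clip each
    sub-histogram to limit noise amplification, then equalize each half into
    its own output range so that mean brightness is roughly preserved."""
    mean = int(gray.mean())
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    lut = np.zeros(256, dtype=np.uint8)

    for lo, hi, out_lo, out_hi in [(0, mean, 0, mean), (mean + 1, 255, mean + 1, 255)]:
        sub = hist[lo:hi + 1].copy()
        if sub.sum() == 0:
            lut[lo:hi + 1] = np.arange(lo, hi + 1).astype(np.uint8)
            continue
        # Clip bins above clip_ratio * average bin height (noise limiting).
        limit = clip_ratio * sub.mean()
        sub = np.minimum(sub, limit)
        cdf = np.cumsum(sub) / sub.sum()
        # Equalize this sub-histogram into its own output range.
        lut[lo:hi + 1] = np.round(out_lo + cdf * (out_hi - out_lo)).astype(np.uint8)

    return lut[gray]
```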

    ConvFaceNeXt: Lightweight Networks for Face Recognition

    No full text
    The current lightweight face recognition models need improvement in terms of floating-point operations (FLOPs), parameters, and model size. Motivated by ConvNeXt and MobileFaceNet, a family of lightweight face recognition models known as ConvFaceNeXt is introduced to overcome the shortcomings listed above. ConvFaceNeXt has three main parts: the stem, bottleneck, and embedding partitions. Unlike ConvNeXt, which applies the revamped inverted bottleneck dubbed the ConvNeXt block in a large ResNet-50 model, the ConvFaceNeXt family is designed as lightweight models. The enhanced ConvNeXt (ECN) block is proposed as the main building block for ConvFaceNeXt and contributes significantly to lowering the FLOP count. In addition to the typical downsampling approach using convolution with a kernel size of three, a patchify strategy using a kernel size of two is implemented as an alternative for the ConvFaceNeXt family, further reducing computational complexity. Moreover, blocks with the same output dimension in the bottleneck partition are added together for better feature correlation. Based on the experimental results, the proposed ConvFaceNeXt models achieve competitive or even better results than previous lightweight face recognition models, together with a significantly lower FLOP count, fewer parameters, and a smaller model size.
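    As a rough PyTorch-style illustration of two points in the abstract, the sketch below shows the alternative downsampling choices (a typical 3x3 strided convolution versus a 2x2 patchify convolution) and the residual addition of blocks with matching output dimensions. The block shown is a generic depthwise-separable stand-in, not the actual ECN block, whose internals are not described in this listing.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableBlock(nn.Module):
    """Stand-in for the ECN block (exact structure not reproduced): a depthwise
    3x3 convolution followed by a pointwise convolution, the usual way
    lightweight models keep the FLOP count low."""
    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim),  # depthwise
            nn.BatchNorm2d(dim),
            nn.Conv2d(dim, dim, kernel_size=1),                         # pointwise
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Blocks with the same output dimension are added to their input,
        # mirroring the feature-correlation addition described in the abstract.
        return x + self.body(x)

def downsample(dim_in: int, dim_out: int, patchify: bool = False) -> nn.Module:
    """Two downsampling choices: a 3x3 strided convolution (typical) or a
    2x2 'patchify' convolution, which touches each pixel once and costs less."""
    k, pad = (2, 0) if patchify else (3, 1)
    return nn.Conv2d(dim_in, dim_out, kernel_size=k, stride=2, padding=pad)
```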

    LCAM: Low-Complexity Attention Module for Lightweight Face Recognition Networks

    No full text
    Inspired by the human visual system's ability to concentrate on the important regions of a scene, attention modules recalibrate the weights of either the channel features alone or together with spatial features, prioritizing informative regions while suppressing unimportant information. However, the floating-point operation (FLOP) and parameter counts become considerably high when incorporating these modules, especially those with both channel and spatial attention, into a baseline model. Despite the success of attention modules in general ImageNet classification tasks, emphasis should be given to incorporating these modules in face recognition tasks. Hence, a novel attention mechanism with three parallel branches, known as the Low-Complexity Attention Module (LCAM), is proposed. There is only one convolution operation per branch, so LCAM is lightweight yet still able to achieve better performance. Experiments on face verification tasks indicate that LCAM achieves similar or even better results compared with previous modules that incorporate both channel and spatial attention. Moreover, compared to the baseline models with no attention modules, LCAM improves the average accuracy over seven image-based face recognition datasets by 0.84% on ConvFaceNeXt, 1.15% on MobileFaceNet, and 0.86% on ProxylessFaceNAS.
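    The abstract states only that LCAM has three parallel branches with a single convolution each, covering channel and spatial attention. The PyTorch sketch below is a hypothetical layout consistent with that description (ECA-style 1D convolutions over pooled channel descriptors plus one 7x7 spatial convolution); the actual LCAM design may differ.

```python
import torch
import torch.nn as nn

class ThreeBranchAttention(nn.Module):
    """Illustrative three-branch attention with one convolution per branch
    (hypothetical layout, not the published LCAM design).
    Branches 1 and 2: channel attention from average- and max-pooled
    descriptors, each passed through a small 1D convolution.
    Branch 3: spatial attention from the channel-wise mean map via a 7x7 conv."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.avg_conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.max_conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.spatial_conv = nn.Conv2d(1, 1, kernel_size=7, padding=3, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel branches: pool to (B, C), treat the channels as a 1D sequence.
        avg = x.mean(dim=(2, 3)).unsqueeze(1)                         # (B, 1, C)
        mx = x.amax(dim=(2, 3)).unsqueeze(1)                          # (B, 1, C)
        chan = self.sigmoid(self.avg_conv(avg) + self.max_conv(mx))   # (B, 1, C)
        chan = chan.transpose(1, 2).unsqueeze(-1)                     # (B, C, 1, 1)
        # Spatial branch: one convolution over the channel-wise mean map.
        spat = self.sigmoid(self.spatial_conv(x.mean(dim=1, keepdim=True)))  # (B, 1, H, W)
        return x * chan * spat
```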

    Polypharmacy and potentially inappropriate medications among hospitalized older adults with COVID-19 in Malaysian tertiary hospitals

    No full text
    Introduction: Older adults are among the most vulnerable groups during the COVID-19 epidemic, contributing to a large proportion of COVID-19-related deaths. Medication review and reconciliation by pharmacists can help reduce the number of potentially inappropriate medications (PIMs), but these services were halted during COVID-19. Aim: To assess the prevalence of and factors associated with inappropriate medicine use among older populations with COVID-19. Methods: This was a cross-sectional, retrospective analysis of medications among hospitalized older adults with COVID-19. Potentially inappropriate medication use was categorized using the Beers and STOPP criteria. Results: Combining both criteria, 181 (32.7%) of the 553 patients were identified as having used at least one potentially inappropriate medication. A marginally higher number of inappropriate medications was documented using the Beers 2019 criteria (151 PIMs in 124 patients) than the STOPP criteria (133 PIMs in 104 patients). The long-term use of proton pump inhibitors (n = 68; 12.3%) and drugs that increase the risk of postural hypotension (n = 41; 7.4%) were the most commonly reported PIMs. Potentially inappropriate medication use was associated with a history of hospital admission in the past 12 months (odds ratio [OR]: 2.27; 95% CI 1.29–3.99) and a higher number of discharge medications. Conclusions: Nearly one in three older adults with COVID-19 had been prescribed a PIM, and the proportion of older adults with polypharmacy increased after discharge. This highlights the importance of having clinical pharmacists conduct medication reviews to identify PIMs and ensure medication appropriateness.